Temporal action localization (TAL) aims to predict the action categories and temporal boundaries (i.e., start and end times) of action instances in untrimmed videos. Fully supervised solutions are adopted in most existing works and have proven effective, but one practical bottleneck of these solutions is the large amount of labeled training data required. To reduce expensive human labeling costs, this paper focuses on a rarely investigated yet practical task named semi-supervised TAL and proposes an effective active learning method named AL-STAL. We employ four steps, named \emph{Train, Query, Annotate, Append}, to actively select video samples with high informativeness and train the localization model. Two scoring functions that consider the uncertainty of the localization model are equipped in AL-STAL, facilitating the ranking and selection of video samples. One takes the entropy of the predicted label distribution as the measure of uncertainty, named Temporal Proposal Entropy (TPE). The other introduces a new metric based on the mutual information between adjacent action proposals to evaluate the informativeness of video samples, named Temporal Context Inconsistency (TCI). To validate the effectiveness of the proposed method, we conduct extensive experiments on two benchmark datasets, THUMOS'14 and ActivityNet 1.3. Experimental results show that AL-STAL outperforms existing competitors and achieves satisfactory performance compared with fully supervised learning.
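As a minimal sketch of the entropy-based scoring idea behind TPE (not the authors' released code; the array shapes, function names, and the mean-pooling over proposals are assumptions for illustration), one could rank unlabeled videos by the average entropy of their per-proposal class distributions:

```python
import numpy as np

def temporal_proposal_entropy(proposal_probs):
    """Hypothetical TPE-style score: mean entropy of per-proposal class distributions.

    proposal_probs: array of shape (num_proposals, num_classes), each row sums to 1.
    Higher entropy -> less confident localizer -> more informative video.
    """
    eps = 1e-12
    entropy = -np.sum(proposal_probs * np.log(proposal_probs + eps), axis=1)
    return float(entropy.mean())

def select_videos(score_per_video, k):
    """Pick the top-k highest-scoring videos from the unlabeled pool to annotate."""
    ranked = sorted(score_per_video.items(), key=lambda kv: kv[1], reverse=True)
    return [vid for vid, _ in ranked[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = {f"video_{i}": temporal_proposal_entropy(rng.dirichlet(np.ones(20), size=50))
            for i in range(100)}
    print(select_videos(pool, k=5))
```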
It is difficult to collect enough defect images to train deep learning networks in industrial production. Therefore, existing industrial anomaly detection methods prefer CNN-based unsupervised detection and localization networks to accomplish this task. However, these methods often fail because traditional end-to-end networks struggle to fit nonlinear models in high-dimensional space. Moreover, they localize anomalies by clustering the features of normal images, which makes them inherently not robust to texture variations. To this end, we propose a Vision Transformer based (ViT-based) unsupervised anomaly detection network. It exploits hierarchical task learning and human experience to enhance its interpretability. Our network consists of a pattern generation network and a comparison network. The pattern generation network uses two ViT-based encoder modules to extract the features of two consecutive image patches, and then a ViT-based decoder module learns the human-designed style of these features and predicts the third image patch. After that, we use a Siamese-based network to compute the similarity between the generated image patch and the original image patch. Finally, we refine the anomaly localization through a bidirectional inference strategy. Comparative experiments on the public MVTec dataset show that our method achieves 99.8% AUC, surpassing previous state-of-the-art methods. In addition, we provide qualitative illustrations on our own leather and cloth datasets. The accurate segmentation results strongly demonstrate the accuracy of our method in anomaly detection.
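The comparison step can be pictured as scoring how far a generated patch drifts from the original one. Below is a hedged sketch of that idea (the embedding dimension, tensor shapes, and the use of cosine similarity are assumptions, not the paper's exact Siamese design):

```python
import torch
import torch.nn.functional as F

def patch_anomaly_score(generated_emb, original_emb):
    """Hypothetical comparison step: 1 - cosine similarity between the embedding
    of the generated patch and that of the original patch.
    Both tensors: (batch, embed_dim). Higher score -> more anomalous patch."""
    sim = F.cosine_similarity(generated_emb, original_emb, dim=-1)
    return 1.0 - sim

if __name__ == "__main__":
    gen = torch.randn(4, 768)
    orig = gen + 0.05 * torch.randn(4, 768)   # near-normal patches differ only slightly
    print(patch_anomaly_score(gen, orig))
```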
Community detection is a fundamental and important problem in network science, yet there are only a few community detection algorithms based on graph neural networks, and unsupervised ones are almost a blank. By fusing high-order modularity information with network features, this paper proposes, for the first time, VGAER, a community detection method based on variational graph autoencoder reconstruction, and gives its non-probabilistic version. Neither requires any prior information. We carefully design the corresponding input features, decoder, and downstream tasks based on the community detection task; these designs are concise, natural, and perform well (NMI gains of 59.1%-565.9% under our design). Based on a series of experiments over a wide range of datasets and advanced methods, VGAER achieves superior performance with competitiveness and potential under a simpler design. Finally, we report the results of algorithm convergence analysis and t-SNE visualization, which clearly depict the stable performance and strong network modularity capability of VGAER. Our code is available at https://github.com/qcydm/vgaer.
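To make the "high-order modularity information" concrete, here is a small sketch of the standard modularity matrix B = A - dd^T/(2m), which is the kind of signal a VGAER-style encoder could fuse with node features (the preprocessing role and function name are assumptions for illustration, not the released pipeline):

```python
import numpy as np

def modularity_matrix(adj):
    """Standard modularity matrix B = A - d d^T / (2m), a natural carrier of
    higher-order modularity information for a VGAER-style reconstruction target.
    adj: symmetric (n, n) adjacency matrix with zero diagonal."""
    degrees = adj.sum(axis=1)
    two_m = degrees.sum()          # 2m = sum of all degrees
    return adj - np.outer(degrees, degrees) / two_m

if __name__ == "__main__":
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(modularity_matrix(A))
```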
Offline reinforcement learning leverages static datasets to learn optimal policies without access to the environment. This technique is desirable for multi-agent learning tasks because of the expense of agents' online interaction and the large number of samples required during training. However, in multi-agent reinforcement learning (MARL), the paradigm of offline pre-training followed by online fine-tuning has never been studied, nor are there datasets or benchmarks available for offline MARL research. In this paper, we try to answer whether offline training in MARL can learn generalizable policy representations that help improve performance on multiple downstream tasks. We first introduce the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose a novel architecture, the Multi-Agent Decision Transformer (MADT), for effective offline learning. MADT leverages the Transformer's ability to model temporal representations and integrates it with both offline and online MARL tasks. A crucial benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. When evaluated on offline StarCraft II data, MADT demonstrates superior performance over state-of-the-art offline RL baselines. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance even in zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability in MARL.
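Decision-transformer-style models are typically trained on trajectories rewritten as (return-to-go, observation, action) triplets. The sketch below illustrates that data formatting for a single agent's offline trajectory; the shapes and function name are assumptions, not MADT's actual preprocessing:

```python
import numpy as np

def to_decision_transformer_sequence(observations, actions, rewards):
    """Hypothetical formatting of one agent's offline trajectory into the
    (return-to-go, observation, action) triplets a decision-transformer-style
    model such as MADT is typically trained on.
    observations: (T, obs_dim); actions: (T,); rewards: (T,)."""
    returns_to_go = np.cumsum(rewards[::-1])[::-1]   # R_t = sum of rewards from t onward
    sequence = []
    for t in range(len(actions)):
        sequence.append(("rtg", float(returns_to_go[t])))
        sequence.append(("obs", observations[t]))
        sequence.append(("act", int(actions[t])))
    return sequence

if __name__ == "__main__":
    T, obs_dim = 5, 8
    rng = np.random.default_rng(1)
    seq = to_decision_transformer_sequence(
        rng.normal(size=(T, obs_dim)), rng.integers(0, 4, size=T), rng.random(T))
    print(seq[:3])
```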
We propose an end-to-end model that uses streaming physiological time series to accurately predict the near-term risk of hypoxemia, a rare but life-threatening condition known to cause serious patient harm during surgery. Inspired by the fact that a hypoxemia event is defined by a future observed instance of low SpO2 (i.e., blood oxygen saturation), our proposed model makes hybrid inference on both future low-SpO2 instances and hypoxemia outcomes. This is enabled by a joint sequence autoencoder that simultaneously optimizes a discriminative decoder for label prediction and two auxiliary decoders trained on data reconstruction and forecasting, which seamlessly learn contextual latent representations that capture the transition from the current state to future states. All decoders share a memory-based encoder that helps capture the global dynamics of patient measurements. On a large surgical cohort of 72,081 surgeries at a major academic medical center, our model outperforms all baselines, including the model used by the state-of-the-art hypoxemia prediction system. With its ability to make real-time predictions of near-term hypoxemic events at clinically acceptable alarm rates, especially for the more critical persistent hypoxemia, our proposed model shows promise for improving clinical decision making and alleviating the burden of perioperative care.
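A minimal sketch of the shared-encoder, multi-decoder shape of such a model is shown below. This is my reading of the description, not the authors' architecture: the GRU encoder, layer sizes, forecasting horizon, and head names are all assumptions, and the paper's memory-based encoder is replaced here by a plain recurrent one.

```python
import torch
import torch.nn as nn

class JointSeqAutoencoder(nn.Module):
    """Sketch: a shared sequence encoder with one discriminative head for the
    hypoxemia label and two auxiliary decoders for reconstruction and for
    forecasting future SpO2 values (all names/sizes are illustrative)."""
    def __init__(self, in_dim=8, hidden=64, horizon=10):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.label_head = nn.Linear(hidden, 1)          # hypoxemia risk logit
        self.recon_head = nn.Linear(hidden, in_dim)     # reconstruct each observed step
        self.forecast_head = nn.Linear(hidden, horizon) # predict future SpO2 values

    def forward(self, x):                               # x: (batch, time, in_dim)
        states, _ = self.encoder(x)
        last = states[:, -1]
        return (self.label_head(last),
                self.recon_head(states),
                self.forecast_head(last))

if __name__ == "__main__":
    model = JointSeqAutoencoder()
    risk, recon, forecast = model(torch.randn(2, 30, 8))
    print(risk.shape, recon.shape, forecast.shape)
```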
The past two decades have seen increasingly rapid advances in the field of multi-view representation learning, which extracts useful information from diverse domains to facilitate the development of multi-view applications. However, the community faces two challenges: i) how to learn representations from a large amount of unlabeled data that remain robust to noise or incomplete-view settings, and ii) how to balance view consistency and complementarity for various downstream tasks. To this end, we utilize a deep fusion network to fuse view-specific representations into a view-common representation, extracting high-level semantics to obtain robust representations. In addition, we employ a clustering task to guide the fusion network and prevent it from collapsing to trivial solutions. To balance consistency and complementarity, we then design an asymmetrical contrastive strategy that aligns the view-common representation with each view-specific representation. These modules are incorporated into a unified method known as CLustering-guided cOntrastiVE fusioN (CLOVEN). We quantitatively and qualitatively evaluate the proposed method on five datasets, demonstrating that CLOVEN outperforms 11 competitive multi-view learning methods in clustering and classification. In the incomplete-view scenario, our proposed method resists noise interference better than its competitors. Furthermore, visualization analysis shows that CLOVEN preserves the intrinsic structure of view-specific representations while also improving the compactness of the view-common representation. Our source code will be available soon at https://github.com/guanzhou-ke/cloven.
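The alignment between the view-common representation and each view-specific representation can be sketched as an InfoNCE-style term; the following is an illustrative reading of that idea, not the released CLOVEN objective (the temperature, batch-wise negatives, and function name are assumptions):

```python
import torch
import torch.nn.functional as F

def align_common_to_view(common, view_specific, temperature=0.5):
    """Hypothetical alignment term: pull each sample's view-common representation
    towards its own view-specific representation and push it away from other
    samples in the batch. common, view_specific: (batch, dim)."""
    z_c = F.normalize(common, dim=1)
    z_v = F.normalize(view_specific, dim=1)
    logits = z_c @ z_v.t() / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(z_c.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    common = torch.randn(16, 128)
    view1 = common + 0.1 * torch.randn(16, 128)
    print(align_common_to_view(common, view1).item())
```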
Our situated environment is full of uncertainty and highly dynamic, hindering the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs that exhibit Artificial General Intelligence (AGI) breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution to advance the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1) with 1.2 billion parameters, which achieves human-level performance on 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
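To illustrate "formulating various decision-making tasks as a sequence decoding task", the toy sketch below casts heterogeneous tasks into one token stream that a single decoder could consume; the prefix scheme and token names are invented for illustration and are not DB1's actual tokenization:

```python
def to_unified_tokens(task_name, observation_tokens, action_tokens):
    """Hypothetical unified formatting: [task prefix] + observations + actions,
    so one Transformer decoder can be trained across many decision tasks."""
    return ([f"<task:{task_name}>"]
            + list(observation_tokens)
            + ["<sep>"]
            + list(action_tokens))

if __name__ == "__main__":
    print(to_unified_tokens("tsp", ["city_3", "city_1"], ["visit_1", "visit_3"]))
    print(to_unified_tokens("atari_pong", ["frame_42"], ["up"]))
```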
Self-supervised representation learning follows a paradigm of withholding some part of the data and tasking the network to predict it from the remaining part. Towards this end, masking has emerged as a generic and powerful tool where content is withheld along the sequential dimension, e.g., spatial in images, temporal in audio, and syntactic in language. In this paper, we explore the orthogonal channel dimension for generic data augmentation. The data for each channel is quantized through a non-uniform quantizer, with the quantized value sampled randomly within randomly sampled quantization bins. From another perspective, quantization is analogous to channel-wise masking, as it removes the information within each bin but preserves the information across bins. We apply the randomized quantization in conjunction with sequential augmentations on self-supervised contrastive models. This generic approach achieves results on par with modality-specific augmentation on vision tasks, and state-of-the-art results on 3D point clouds as well as on audio. We also demonstrate that this method is applicable to augmenting intermediate embeddings in a deep neural network on the comprehensive DABS benchmark, which comprises various data modalities. Code is available at http://www.github.com/microsoft/random_quantize.
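A minimal sketch of the described augmentation, based on my reading of the text rather than the released implementation (the bin count, edge sampling, and array layout are assumptions): per channel, random bin edges define a non-uniform quantizer, and each value is replaced by a random point inside its bin, destroying within-bin detail while keeping across-bin structure.

```python
import numpy as np

def randomized_quantize(x, num_bins=8, rng=None):
    """Sketch of channel-wise randomized quantization as a data augmentation.
    x: (channels, length) array."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty_like(x, dtype=float)
    for c in range(x.shape[0]):
        lo, hi = x[c].min(), x[c].max()
        inner = np.sort(rng.uniform(lo, hi, size=num_bins - 1))   # random bin edges
        edges = np.concatenate(([lo], inner, [hi]))
        bins = np.clip(np.searchsorted(edges, x[c], side="right") - 1, 0, num_bins - 1)
        out[c] = rng.uniform(edges[bins], edges[bins + 1])        # resample within each bin
    return out

if __name__ == "__main__":
    signal = np.stack([np.linspace(-1, 1, 16), np.sin(np.linspace(0, 3, 16))])
    print(randomized_quantize(signal, num_bins=4, rng=np.random.default_rng(0)))
```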
User and product information associated with a review is useful for sentiment polarity prediction. Typical approaches incorporating such information model users and products as implicitly learned representation vectors. Most do not exploit the potential of historical reviews, while those that do either require unnecessary modifications to the model architecture or fail to make full use of user/product associations. The contribution of this work is twofold: i) a method to explicitly employ historical reviews belonging to the same user/product to initialize representations, and ii) efficient incorporation of textual associations between users and products via a user-product cross-context module. Experiments on the IMDb, Yelp-2013 and Yelp-2014 benchmarks show that our approach substantially outperforms the previous state of the art. Since we employ BERT-base as the encoder, we additionally provide experiments in which our approach performs well with SpanBERT and Longformer. Furthermore, experiments in which the reviews of each user/product in the training data are downsampled demonstrate the effectiveness of our approach in a low-resource setting.
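One simple way to picture "initializing representations from historical reviews" is mean-pooling the encoder embeddings of a user's (or product's) past reviews instead of learning the vector from scratch. The sketch below is only an illustration of that idea under my assumptions; it is not the paper's initialization procedure:

```python
import numpy as np

def init_entity_embedding(historical_review_embeddings):
    """Hypothetical initialization: average the encoder embeddings of the reviews
    a user wrote (or a product received).
    historical_review_embeddings: (num_reviews, dim)."""
    return historical_review_embeddings.mean(axis=0)

if __name__ == "__main__":
    review_embs = np.random.default_rng(0).normal(size=(12, 768))  # e.g. [CLS] vectors
    user_vec = init_entity_embedding(review_embs)
    print(user_vec.shape)
```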
In this work, we propose an ID-preserving talking head generation framework, which advances previous methods in two aspects. First, as opposed to interpolating from sparse flow, we claim that dense landmarks are crucial to achieving accurate geometry-aware flow fields. Second, inspired by face-swapping methods, we adaptively fuse the source identity during synthesis, so that the network better preserves the key characteristics of the image portrait. Although the proposed model surpasses prior generation fidelity on established benchmarks, personalized fine-tuning is usually needed to further qualify talking head generation for real-world use. However, this process is computationally demanding and unaffordable to standard users. To solve this, we propose a fast adaptation model using a meta-learning approach. The learned model can be adapted into a high-quality personalized model in as little as 30 seconds. Last but not least, a spatial-temporal enhancement module is proposed to improve fine details while ensuring temporal coherency. Extensive experiments demonstrate the significant superiority of our approach over the state of the art in both one-shot and personalized settings.